
    Named data networking for efficient IoT-based disaster management in a smart campus

    Disasters are unpredictable events that can have a drastic impact on human life and building infrastructure. Information and Communication Technology (ICT) plays a vital role in coping with such situations by enabling and integrating multiple technological resources to develop Disaster Management Systems (DMSs). In this context, the majority of existing DMSs use networking architectures based on the Internet Protocol (IP), focusing on location-dependent communications. However, IP-based communications suffer from inefficient bandwidth utilization, high processing overhead, weak data security, and excessive memory consumption. To address these issues, Named Data Networking (NDN) has emerged as a promising communication paradigm based on the Information-Centric Networking (ICN) architecture. NDN is a self-organizing communication paradigm that reduces the complexity of networking systems in addition to providing content security. Accordingly, many NDN-based DMSs have been proposed. The problem with existing NDN-based DMSs is that they use a PULL-based mechanism, which ultimately results in higher delay and greater energy consumption. To cater for time-critical scenarios, emergency-driven communication and computation models are required. In this paper, a novel DMS is proposed, Named Data Networking Disaster Management (NDN-DM), in which a producer forwards a fire alert message to neighbouring consumers. This makes the nodes converge on the disaster situation in a more efficient and secure way. Furthermore, we consider a fire scenario on a university campus, where mobile nodes collaborate with each other to manage the fire situation. The proposed framework has been mathematically modeled using timed automata-based transition systems and formally verified using a real-time model checker. Additionally, the proposed NDN-DM has been evaluated using NS2. The results show that the proposed scheme reduces end-to-end delay by 2% to 10% and energy consumption by 3% to 20% compared with a state-of-the-art NDN-based DMS.
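
    As a rough illustration of the PUSH-based dissemination described above, the following Python sketch shows a producer forwarding a named alert to neighbouring consumers, with duplicate suppression. The Node class, content names, and topology are illustrative assumptions, not the paper's NS2 implementation.

    # Sketch of PUSH-based alert dissemination in an NDN-style network.
    # All class and field names are illustrative, not the paper's code.

    class Node:
        def __init__(self, name):
            self.name = name
            self.neighbours = []
            self.seen = set()      # names of content already received

        def link(self, other):
            self.neighbours.append(other)
            other.neighbours.append(self)

        def publish(self, content_name, payload):
            """Producer side: originate a named alert and push it out."""
            self.seen.add(content_name)
            for n in self.neighbours:
                n.push(content_name, payload)

        def push(self, content_name, payload):
            """Consumer side: accept a pushed Data packet and forward it on."""
            if content_name in self.seen:
                return                          # duplicate, drop it
            self.seen.add(content_name)
            print(f"{self.name} received {content_name}: {payload}")
            for n in self.neighbours:
                n.push(content_name, payload)

    # Campus fire scenario: the producer pushes; no PULL Interests are needed.
    producer = Node("sensor")
    a, b, c = Node("phone-A"), Node("phone-B"), Node("phone-C")
    producer.link(a); a.link(b); b.link(c)
    producer.publish("/campus/fire/block-7/alert", "evacuate")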

    Cytokine gene polymorphisms and serum cytokine levels in patients with idiopathic pulmonary fibrosis

    BACKGROUND: Studies have demonstrated associations between cytokine gene polymorphisms and the risk of idiopathic pulmonary fibrosis (IPF). We therefore examined polymorphisms in the genes encoding interleukin (IL)-6, IL-10, interferon gamma (IFN-γ), tumor necrosis factor alpha (TNF-α), and transforming growth factor beta 1 (TGF-β1), and compared the serum levels of these cytokines in IPF patients and healthy controls. Furthermore, we examined the association of the studied genotypes and serum cytokine levels with physiological parameters and the extent of parenchymal involvement determined by high-resolution computed tomography (HRCT). METHODS: Sixty patients with IPF and 150 healthy controls were included. Cytokine genotyping was performed using the polymerase chain reaction sequence-specific primer (PCR-SSP) method. In a subset of patients and controls, serum cytokine levels were determined by enzyme-linked immunosorbent assay. RESULTS: There was no difference between IPF patients and controls in the genotype and allele distributions of polymorphisms in TNF-α, IFN-γ, IL-6, IL-10, and TGF-β1 (all p > 0.05). The TNF-α (−308) GG, IL-6 (−174) GG and CG, and IL-10 (−1082, −819, −592) ACC/ATA genotypes were significantly associated with HRCT scores (all p < 0.05). The IL-10 (−1082, −819, −592) ACC haplotype was associated with the diffusion capacity of the lung for carbon monoxide, and the ATA haplotype was associated with the partial pressure of oxygen (PaO2) (all p < 0.05). The TGF-β1 (codons 10 and 25) TC/GG, TC/GC, CC/GG, and CC/GC genotypes were significantly associated with the PaO2 and HRCT scores (p < 0.05). The TGF-β1 (codons 10 and 25) CC/GG genotype (5 patients) was significantly associated with a higher PaO2 value and less parenchymal involvement (i.e., a lower total extent score) compared to the other TGF-β1 genotypes (81.5 ± 11.8 mm Hg vs. 67.4 ± 11.1 mm Hg, p = 0.009 and 5.60 ± 1.3 vs. 8.51 ± 2.9, p = 0.037, respectively). Significant differences were noted between patients (n = 38) and controls (n = 36) in the serum levels of IL-6 and IL-10 (both p < 0.0001), but not in the levels of TNF-α and TGF-β1 (both p > 0.05). CONCLUSION: The studied genotypes and alleles do not predispose to the development of IPF but appear to play an important role in disease severity. Our results suggest that the TGF-β1 (codons 10 and 25) CC/GG genotype could be a useful genetic marker for identifying a subset of IPF patients with a favorable prognosis; however, validation in a larger sample is required.
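
    For readers unfamiliar with the genotype-distribution comparisons reported above, the following Python sketch runs a chi-square test on a patients-versus-controls genotype table. The counts are invented for illustration and are not the study's data.

    # Chi-square comparison of genotype distributions, patients vs. controls.
    # The counts below are invented; they are not the study's data.
    from scipy.stats import chi2_contingency

    #             GG   GA   AA   (a hypothetical promoter polymorphism)
    patients  = [ 40,  15,   5]
    controls  = [105,  35,  10]

    chi2, p, dof, expected = chi2_contingency([patients, controls])
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p > 0.05 -> no association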

    StabTrust-A Stable and Centralized Trust-Based Clustering Mechanism for IoT Enabled Vehicular Ad-Hoc Networks

    A Vehicular Ad-hoc Network (VANET) is a modern means of dynamic information distribution in society. VANETs support a wide diversity of applications in various domains, such as Intelligent Transport Systems (ITS) and other road-safety applications. VANETs support direct communications between vehicles and infrastructure. These direct communications cause bandwidth problems, high power consumption, and other similar issues. To overcome these challenges, clustering methods have been proposed to limit the communication of vehicles with the infrastructure. In clustering, vehicles are grouped together to form a cluster based on certain rules. Every cluster consists of a limited number of vehicles/nodes and a cluster head (CH). However, the significant challenge for clustering is to preserve the stability of clusters. Furthermore, a secure mechanism is required to recognize malicious and compromised nodes and thereby overcome the risk of invalid information sharing. In the proposed approach, we address these challenges using components of trust. A trust-based clustering mechanism allows clusters to determine a trustworthy CH. The novel features incorporated in the proposed algorithm include trust-based CH selection that comprises the knowledge, reputation, and experience of a node. Also, a backup head is determined by analyzing the trust of every node in a cluster. The major significance of using trust in clustering is the identification of malicious and compromised nodes, whose recognition eliminates the risk of invalid information. We have also evaluated the proposed mechanism against existing approaches, and the results illustrate that the mechanism provides security and improves stability by increasing the lifetime of CHs and by decreasing the computation overhead of CH re-selection. StabTrust also successfully identifies malicious and compromised vehicles and provides robust security against several potential attacks. This work was supported by the Deanship of Scientific Research, King Saud University, through the Vice Deanship of Scientific Research Chairs.
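
    The following Python sketch illustrates the kind of trust-based cluster-head selection described above, combining knowledge, reputation, and experience into a single score and picking a head and a backup head. The weights and the malicious-node threshold are illustrative assumptions, not StabTrust's actual parameters.

    # Sketch of trust-based cluster-head (CH) selection from knowledge,
    # reputation, and experience. Weights and threshold are illustrative
    # assumptions, not StabTrust's actual parameters.

    def trust(node, w_know=0.4, w_rep=0.3, w_exp=0.3):
        return (w_know * node["knowledge"] +
                w_rep  * node["reputation"] +
                w_exp  * node["experience"])

    def select_heads(cluster, malicious_threshold=0.3):
        # nodes below the threshold are treated as malicious/compromised
        trusted = [n for n in cluster if trust(n) >= malicious_threshold]
        ranked = sorted(trusted, key=trust, reverse=True)
        return ranked[0], ranked[1]     # CH and backup head

    cluster = [
        {"id": "v1", "knowledge": 0.9, "reputation": 0.8, "experience": 0.7},
        {"id": "v2", "knowledge": 0.6, "reputation": 0.9, "experience": 0.8},
        {"id": "v3", "knowledge": 0.1, "reputation": 0.2, "experience": 0.1},
    ]
    head, backup = select_heads(cluster)
    print("CH:", head["id"], "backup:", backup["id"])   # v3 is filtered out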

    A Bio-Inspired Heuristic Algorithm for Solving Optimal Power Flow Problem in Hybrid Power System

    In recent studies, emphasis has been placed on optimal power flow (OPF) problems in hybrid power systems based on traditional thermal, wind, and solar energy sources. Various metaheuristic algorithms have been proposed to find optimal solutions to the OPF problem in such hybrid power systems. Due to the quadratic nature of its primary objective function, the OPF is a nonlinear, nonconvex, quadratic optimization problem. In this study, we propose a bio-inspired bird swarm algorithm (BSA) to find an optimal solution to the OPF problem in the hybrid power system, because BSA performs well in optimizing the well-known Rastrigin quadratic benchmark function. The uncertainty of utility load demand and the stochastic electricity output from renewable energy resources (RESs), including wind and solar, are incorporated into the hybrid power system to achieve accuracy in its operation and planning. We use a modified IEEE 30-bus test system to verify and measure the performance of BSA, and a comparison is made with well-known evolutionary metaheuristic algorithms. The proposed BSA consistently achieves more accurate and stable results than the other metaheuristic algorithms. Simulation-based optimization results show the superiority of the BSA approach in solving the OPF problem: all constraints are satisfied and a minimum power generation cost of 863.121 $/h is achieved in case study 1. The experiments also indicate that imposing the carbon tax increases the power generation from RESs. In case study 2, the proposed BSA approach again outperforms the other algorithms, achieving a minimum electricity cost of 890.728 $/h.
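
    Below is the Rastrigin benchmark mentioned above, minimized by a simplified population-based search in Python. This is not the full BSA, which alternates foraging, vigilance, and flight behaviours; it only sketches the style of update such swarm algorithms apply.

    # The Rastrigin benchmark minimized by a simplified swarm-style search.
    # Not the full BSA; only the general idea of a population converging on
    # the best-known position is shown.
    import numpy as np

    def rastrigin(x):
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(0)
    dim, n_birds, iters = 2, 30, 200
    swarm = rng.uniform(-5.12, 5.12, (n_birds, dim))
    best = min(swarm, key=rastrigin).copy()

    for _ in range(iters):
        # each bird moves toward the best position found so far, plus noise
        swarm += 0.1 * (best - swarm) + rng.normal(0.0, 0.1, swarm.shape)
        cand = min(swarm, key=rastrigin)
        if rastrigin(cand) < rastrigin(best):
            best = cand.copy()

    print("best:", best, "f(best):", rastrigin(best))  # optimum is 0 at the origin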

    A Cost-Effective Optimization for Scheduling of Household Appliances and Energy Resources

    In the literature, proposed approaches have mostly focused on household appliance scheduling for reducing consumers' electricity bills, the peak-to-average ratio, and electricity usage in peak load hours, and for enhancing user comfort. The scheduling of smart-home-deployed energy resources has recently become a critical issue on the demand side due to the higher share of renewable energy sources. In this paper, a new hybrid genetic-based harmony search (HGHS) approach is proposed for modeling the home energy management system, which contributes to minimizing consumers' electricity bills and electricity usage during peak load hours by scheduling both household appliances and smart-home-deployed energy resources. We comparatively evaluated the optimization results obtained from the proposed HGHS and other approaches. The experimental results confirmed the superiority of HGHS over the genetic algorithm (GA) and the harmony search algorithm (HSA). The electricity cost for completing the one-day operation of household appliances was limited to 1305.7 cents, 953.65 cents, and 569.44 cents under the proposed scheduling approach for case I, case II, and case III, respectively, lower than under the other approaches. Compared to an unscheduled load scenario, the electricity consumption cost was reduced by up to 23.125%, 43.87%, and 66.44% in case I, case II, and case III, respectively. Moreover, the electrical peak load was limited to 3.07 kW, 2.9478 kW, and 1.9 kW under the proposed HGHS scheduling approach, again lower than under the other approaches.
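
    The following Python sketch shows the kind of cost function such a scheduler optimizes: appliance electricity cost under a time-of-use tariff, minimized here by a naive random search where HGHS would evolve schedules instead. The tariff and appliance figures are invented for illustration and are not the paper's data.

    # Cost of an appliance schedule under a time-of-use tariff, minimized by
    # naive random search; HGHS would evolve schedules with genetic and
    # harmony-search operators instead. All figures are illustrative.
    import random

    PRICE = [8.0]*7 + [25.0]*5 + [12.0]*5 + [30.0]*4 + [8.0]*3  # cents/kWh, hours 0-23
    APPLIANCES = {"washer": (1.5, 2), "dishwasher": (1.2, 1), "heater": (2.0, 3)}  # (kW, hours)

    def cost(schedule):
        # schedule maps appliance -> start hour; operation is uninterrupted
        total = 0.0
        for app, start in schedule.items():
            power, hours = APPLIANCES[app]
            total += sum(power * PRICE[(start + h) % 24] for h in range(hours))
        return total

    random.seed(0)
    candidates = ({a: random.randrange(24) for a in APPLIANCES} for _ in range(5000))
    best = min(candidates, key=cost)
    print(best, f"-> {cost(best):.1f} cents")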

    Facial Expression Recognition Utilizing Local Direction-Based Robust Features and Deep Belief Network

    Emotional health plays a vital role in improving people's quality of life, especially for the elderly. Negative emotional states can lead to social or mental health problems. To cope with emotional health problems caused by negative emotions in daily life, we propose an efficient facial expression recognition (FER) system to contribute to emotional healthcare. Facial expressions play a key role in our daily communications, and recent years have witnessed a great amount of research on reliable FER systems. However, facial expression evaluation and analysis from video is very challenging, and its accuracy depends on the extraction of robust features. In this paper, a unique feature extraction method is presented to extract distinguishing features from the human face. For person-independent expression recognition, depth video data is used as input to the system, where in each frame the pixel intensities are distributed according to the distances to the camera. A novel, robust feature extraction process named local directional position pattern (LDPP) is applied in this work. In LDPP, after extracting local directional strengths for each pixel, as in the typical local directional pattern (LDP), the positions of the top directional strengths are encoded in binary along with their strength sign bits. Considering the top directional strength positions together with the strength signs lets LDPP differentiate edge pixels that have a bright region on one side and a dark region on the other, since these generate different patterns. Typical LDP, by contrast, considers only which directions carry the top strengths, irrespective of their signs and position orders (i.e., directions with top strengths are coded 1 and the rest 0), so it can sometimes generate the same pattern for such pixels and thus fails to distinguish edge pixels with opposite bright and dark regions; LDPP overcomes this limitation. Moreover, the LDPP capabilities are extended through principal component analysis (PCA) and generalized discriminant analysis (GDA) for a better representation of facial characteristics in expressions. The proposed features are finally fed to a deep belief network (DBN) for expression training and recognition.
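
    A Python sketch of an LDP/LDPP-style directional code for a single pixel is given below. The eight Kirsch compass masks are standard; the way the top-k positions and sign bits are packed into one code here is a simplified illustration of the LDPP idea, not the paper's exact encoding.

    # LDP/LDPP-style directional code for one pixel. Kirsch masks are the
    # standard compass masks; the position/sign packing is an illustrative
    # simplification of the LDPP idea, not the paper's exact encoding.
    import numpy as np

    KIRSCH = [np.array(m) for m in (
        [[-3,-3, 5], [-3, 0, 5], [-3,-3, 5]],   # east
        [[-3, 5, 5], [-3, 0, 5], [-3,-3,-3]],   # north-east
        [[ 5, 5, 5], [-3, 0,-3], [-3,-3,-3]],   # north
        [[ 5, 5,-3], [ 5, 0,-3], [-3,-3,-3]],   # north-west
        [[ 5,-3,-3], [ 5, 0,-3], [ 5,-3,-3]],   # west
        [[-3,-3,-3], [ 5, 0,-3], [ 5, 5,-3]],   # south-west
        [[-3,-3,-3], [-3, 0,-3], [ 5, 5, 5]],   # south
        [[-3,-3,-3], [-3, 0, 5], [-3, 5, 5]],   # south-east
    )]

    def ldpp_code(patch, k=3):
        """patch: 3x3 grey-level neighbourhood around the pixel."""
        strengths = np.array([np.sum(m * patch) for m in KIRSCH])
        top = np.argsort(np.abs(strengths))[::-1][:k]   # positions of top-k strengths
        code = 0
        for pos in top:                                  # pack 3-bit position + sign bit
            code = (code << 4) | (int(pos) << 1) | (1 if strengths[pos] >= 0 else 0)
        return code

    patch = np.array([[10, 10, 200], [10, 10, 200], [10, 10, 200]])  # vertical edge
    print(bin(ldpp_code(patch)))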

    Improved Resource Allocation in 5G MTC Networks

    Effective resource allocation has always been one of the serious challenges in wireless communication. The considerable number of machine type communication (MTC) devices in 5G, with variable quality of service (QoS) requirements, aggravates this challenge even further. Existing resource allocation schemes in MTC usually consider the signal-to-noise ratio (SNR), which gives preference to MTC devices based on distance rather than on their QoS requirements. This paper proposes a resource allocation scheme with dynamic priorities for MTC devices with multiple radio access technologies (RATs). The proposed scheme has two main parts, namely medium access and resource allocation. The medium access part leverages the broadcast nature of the wireless signal and the MTC devices' wait times to assign priorities in the capillary band in a secure and integrity-preserving way. In the resource allocation part, the SNR, the total induced transmission delay, and the number of transmission-awaiting MTC devices are used to assign resources in the cellular band. The two-stage dynamic priorities in the proposed scheduling scheme bring significant performance improvements in outage and success probabilities. Compared to SNR-based schemes, the proposed mechanism improves the outage and success probabilities by 20% and 30%, respectively.
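
    The following Python sketch illustrates a dynamic-priority ranking that weighs accumulated waiting delay against SNR, so that distant devices are not starved by SNR-only scheduling. The weighting and normalization constants are illustrative assumptions, not the paper's scheduler.

    # Dynamic-priority ranking for MTC devices that weighs SNR against
    # accumulated waiting delay, instead of SNR alone. The weights and
    # normalization constants are illustrative assumptions.

    def priority(dev, w_snr=0.5, w_delay=0.5, snr_max=30.0, delay_max=100.0):
        # normalise both terms to [0, 1]; long-waiting devices gain priority
        return (w_snr * min(dev["snr"], snr_max) / snr_max +
                w_delay * min(dev["delay_ms"], delay_max) / delay_max)

    devices = [
        {"id": "d1", "snr": 28.0, "delay_ms":  5.0},   # near the base station
        {"id": "d2", "snr":  9.0, "delay_ms": 90.0},   # far away, long wait
        {"id": "d3", "snr": 15.0, "delay_ms": 40.0},
    ]
    n_blocks = 2   # resource blocks available this scheduling round
    for dev in sorted(devices, key=priority, reverse=True)[:n_blocks]:
        print("allocate block to", dev["id"], f"priority={priority(dev):.2f}")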

    5G vehicular network resource management for improving radio access through machine learning

    Current cellular technology and vehicular networks cannot satisfy the rapidly growing demands of vehicular networking. Resource management has become a complex and challenging objective for achieving the expected outcomes in a vehicular environment. The 5G cellular network promises ultra-high-speed, low-delay, and reliable communications. New technologies such as network function virtualization (NFV) and software-defined networking (SDN) are critical enablers of 5G. An SDN-based 5G network can provide an excellent platform for autonomous vehicles because SDN offers open programmability and flexibility for incorporating new services. The SDN separation of the control and data planes enables centralized and efficient management of resources in an optimized and secure manner by maintaining a global overview of the whole network. SDN also provides flexibility in communication administration and resource management, which is of critical importance, given the ad-hoc nature of vehicular network infrastructures, for the safety, privacy, and security of vehicular network environments. In addition, it promises improved overall performance. In this paper, we propose a flow-based policy framework built on two-tier virtualization for vehicular networks using SDN. Vehicle-to-vehicle (V2V) communication becomes possible with wireless virtualization, where different radio resources are allocated to V2V communications based on the flow classification, i.e., safety-related or non-safety flows, and the controller is responsible for managing the overall vehicular environment and V2X communications. The motivation behind this study is to implement a machine-learning-enabled architecture to cater to the sophisticated demands of modern vehicular Internet infrastructures. The inclination towards robust communications in 5G-enabled networks has made it tricky to manage network slicing efficiently. This paper also presents a proof of concept for machine-learning-enabled resource classification and management, evaluated experimentally on a special-purpose testbed established in a custom Mininet setup. Furthermore, the results have been evaluated using a Long Short-Term Memory (LSTM) network, a Convolutional Neural Network (CNN), and a Deep Neural Network (DNN); the LSTM outperformed the other classification techniques with promising results. This work was supported by King Saud University.
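
    Below is a minimal Keras sketch of an LSTM flow classifier of the kind evaluated above: sequences of per-interval flow statistics are labelled safety versus non-safety. The synthetic data, the toy labelling rule, and the network size are illustrative assumptions, not the paper's testbed traffic.

    # LSTM classifier for flow sequences, safety vs. non-safety. The data is
    # synthetic and the labelling rule is a toy assumption for illustration.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    T, F = 10, 3                       # 10 time steps, 3 features (e.g. rate, size, gap)
    X = rng.normal(size=(2000, T, F)).astype("float32")
    y = (X[:, :, 0].mean(axis=1) > 0).astype("float32")   # toy label rule

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(T, F)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(safety-related flow)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=3, batch_size=64, verbose=0)
    print("accuracy:", model.evaluate(X, y, verbose=0)[1])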